Foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models focus only on image-level pretraining and adaptation, which is limiting for dynamic and complex video-level understanding tasks. To fill this gap, we present general video foundation models, InternVideo, which exploit both generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as pretraining objectives, and selectively coordinates the video representations of these two complementary frameworks in a learnable manner to boost various video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets spanning a wide range of tasks, including video action recognition/detection, video-language alignment, and open-world video applications. In particular, our method obtains 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results demonstrate the generality of InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo .
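The abstract does not specify how the two representations are coordinated; as a purely hypothetical illustration of "learnable coordination" of a masked-modeling feature and a contrastive feature, a minimal gating sketch might look like the following (module and parameter names are assumptions, not InternVideo's actual implementation):

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Hypothetical learnable fusion of two video representations.

    Not InternVideo's actual module: a minimal sketch that blends a
    masked-video-modeling embedding and a video-language contrastive
    embedding with a gate learned from both.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, feat_mvm: torch.Tensor, feat_clip: torch.Tensor) -> torch.Tensor:
        # feat_mvm, feat_clip: (batch, dim) clip-level embeddings
        g = self.gate(torch.cat([feat_mvm, feat_clip], dim=-1))
        return g * feat_mvm + (1.0 - g) * feat_clip

# Usage (illustrative): fused = GatedFusion(768)(mvm_embedding, contrastive_embedding)
```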
We consider the robust linear regression model $\boldsymbol{y} = X\beta^* + \boldsymbol{\eta}$, where an adversary oblivious to the design $X \in \mathbb{R}^{n \times d}$ may choose $\boldsymbol{\eta}$ to corrupt all but a (possibly vanishing) fraction of the observations $\boldsymbol{y}$ in an arbitrary way. Recent work [DLN+21, DNS21] introduced efficient algorithms that consistently recover the parameter vector. These algorithms crucially rely on the design matrix being well-spread (a matrix is well-spread if its column span is far from any sparse vector). In this paper, we show that there exists a family of design matrices lacking well-spreadness for which consistent recovery of the parameter vector in the above robust linear regression model is theoretically impossible. We further investigate the average-case time complexity of certifying well-spreadness of random matrices. We show that it is possible to efficiently certify whether a given $n$-by-$d$ Gaussian matrix is well-spread when the number of observations is quadratic in the ambient dimension. We complement this result by presenting rigorous evidence for the computational hardness of the same certification problem when the number of observations is $o(d^2)$.
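To make the corruption model concrete, a small simulation sketch under an oblivious adversary might look like the following (the Gaussian design and the particular corruption pattern are illustrative assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 50
alpha = 0.05  # fraction of observations left (nearly) clean, illustrative

X = rng.standard_normal((n, d))       # Gaussian design
beta_star = rng.standard_normal(d)

# Oblivious adversary: eta is fixed without access to X,
# corrupting all but an alpha-fraction of the observations arbitrarily.
eta = 100.0 * rng.standard_normal(n)              # arbitrary corruptions
clean = rng.random(n) < alpha
eta[clean] = rng.standard_normal(clean.sum())      # small noise on the few clean entries

y = X @ beta_star + eta                            # observed responses
```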
As the basis for prehensile manipulation, it is vital to enable robots to grasp as robustly as humans do. In daily manipulation, our grasping system is prompt, accurate, flexible, and continuous across spatial and temporal domains. Few existing methods cover all of these properties for robot grasping. In this paper, we propose a new methodology for grasp perception that gives robots these abilities. Specifically, we develop a dense supervision strategy with real perception and analytic labels in the spatial-temporal domain. Additional awareness of objects' center of mass is incorporated into the learning process to help improve grasping stability. Utilizing grasp correspondence across observations enables dynamic grasp tracking. Our model, AnyGrasp, can generate accurate, full-DoF, dense, and temporally smooth grasp poses efficiently, and works robustly against large depth-sensing noise. Embedded with AnyGrasp, we achieve a 93.3% success rate when clearing bins with over 300 unseen objects, which is comparable to human subjects under controlled conditions. Over 900 MPPH (mean picks per hour) is reported on a single-arm system. For dynamic grasping, we demonstrate catching a swimming robot fish in water.
Deep reinforcement learning has achieved great success in various fields thanks to its strong decision-making ability. However, the policy learning process requires a large amount of training time and consumes considerable energy. Inspired by the redundancy of neural networks, we propose AcceRL, a lightweight parallel training framework based on neural network compression, to accelerate policy learning while preserving policy quality. Specifically, AcceRL speeds up experience collection by flexibly combining various neural network compression methods. Overall, AcceRL consists of five components: Actor, Learner, Compressor, Corrector, and Monitor. The Actor uses the Compressor to compress the Learner's policy network for interacting with the environment. The generated experiences are then transformed by the Corrector with off-policy correction methods such as V-trace and Retrace, and the corrected experiences are fed to the Learner for policy learning. To our knowledge, this is the first general reinforcement learning framework that incorporates multiple neural network compression techniques. Extensive experiments conducted in Gym show that AcceRL reduces the actor's time cost by about 2.0x to 4.13x compared to traditional methods. Furthermore, AcceRL reduces the overall training time by about 29.8% to 40.3% compared to traditional methods while maintaining the same policy quality.
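Based only on the component description above, a schematic (and necessarily simplified) version of the Actor-Compressor-Corrector-Learner loop could be sketched as follows; the function names and stubbed bodies are assumptions for illustration, not AcceRL's actual implementation:

```python
# Hypothetical sketch of an AcceRL-style training loop (names and details assumed).

def compress(policy_net):
    """Compressor: return a lightweight copy of the learner's policy
    (e.g. pruned or quantized) for fast environment interaction."""
    ...

def collect_experience(env, compressed_policy, n_steps):
    """Actor: roll out the compressed policy and record trajectories."""
    ...

def correct(experiences, learner_policy, behavior_policy):
    """Corrector: re-weight off-policy experiences, e.g. with V-trace or
    Retrace importance corrections, so they match the learner's policy."""
    ...

def train_loop(env, learner, n_iters, n_steps):
    for _ in range(n_iters):
        actor_policy = compress(learner.policy)                  # Compressor
        batch = collect_experience(env, actor_policy, n_steps)   # Actor
        batch = correct(batch, learner.policy, actor_policy)     # Corrector
        learner.update(batch)                                    # Learner
        # A Monitor component would track policy quality here.
```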
Neural Radiance Fields (NeRF), a novel view synthesis method with implicit scene representation, has taken the field of computer vision by storm. As a novel view synthesis and 3D reconstruction method, NeRF models find applications in robotics, urban mapping, autonomous navigation, virtual reality/augmented reality, and more. Since the original paper by Mildenhall et al., more than 250 preprints have been published, with more than 100 eventually accepted at tier-one computer vision conferences. Given NeRF's popularity and the current interest in this research area, we believe it necessary to compile a comprehensive survey of NeRF papers from the past two years, which we organize into both architecture-based and application-based taxonomies. We also provide an introduction to the theory of NeRF-based novel view synthesis, and a benchmark comparison of the performance and speed of key NeRF models. By creating this survey, we hope to introduce new researchers to NeRF, provide a helpful reference for influential works in this field, and motivate future research directions with our discussion section.
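For readers new to the topic, the volume-rendering formulation at the heart of NeRF (following Mildenhall et al.) writes the color of a camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ as $C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt$, where $T(t) = \exp\big(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,ds\big)$ is the accumulated transmittance, $\sigma$ is the learned volume density, and $\mathbf{c}$ is the view-dependent color; in practice the integral is approximated by quadrature over discrete samples along the ray.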
Interpolation and reconstruction of missing traces is a crucial step in seismic data processing; moreover, it is a highly ill-posed problem, especially for complex cases such as high-ratio random discrete missing traces, continuous missing traces, and missing traces in fault-rich or salt-body surveys. These complex cases are rarely mentioned in current works. To cope with complex missing cases, we propose Multi-Dimensional Adversarial GAN (MDA GAN), a novel 3-D GAN framework. It employs three discriminators to maintain the anisotropy and spatial continuity of the data after reconstructing 3-D complex missing regions. A feature stitching module is designed and embedded into the generator to retain more information from the input data. A Tanh cross-entropy (TCE) loss is derived, which provides the generator with optimal reconstruction gradients and makes the generated data smoother and more continuous. We experimentally verify the effectiveness of the individual components of this study and then test the method on multiple publicly available datasets. The method achieves reasonable reconstruction with up to 95% random discrete missing traces and 100 consecutive missing traces. In fault-rich and salt-body surveys, MDA GAN still delivers encouraging results for complex cases. Experiments demonstrate that our method performs better than other methods in both simple and complex cases. https://github.com/douyimin/mda_gan
Transparent objects are common in our daily life and are frequently handled on automated production lines. Robust vision-based robotic grasping and manipulation of these objects would benefit automation. However, most current grasping algorithms fail in such cases, since they rely heavily on depth images, while ordinary depth sensors usually cannot produce accurate depth information for transparent objects owing to the reflection and refraction of light. In this work, we address this problem by contributing a large-scale real-world dataset for transparent object depth completion, containing 57,715 RGB-D images from 130 different scenes. Our dataset is the first large-scale, real-world dataset that provides ground-truth depth, surface normals, and transparency masks in diverse and cluttered scenes. Cross-domain experiments show that our dataset is more general and gives models better generalization ability. Moreover, we propose an end-to-end depth completion network that takes an RGB image and an inaccurate depth map as input and outputs a refined depth map. Experiments demonstrate the superior efficiency, effectiveness, and robustness of our method over previous works, and it is able to process high-resolution images under limited hardware resources. Real-robot experiments show that our method can also be applied to robustly grasping novel transparent objects. The full dataset and our method are publicly available at www.graspnet.net/transcg
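As an illustration of the input/output contract only (not the proposed network's actual architecture), a depth-completion model of this kind consumes an RGB image together with an unreliable raw depth map and regresses a refined depth map; a toy sketch might look like this:

```python
import torch
import torch.nn as nn

class ToyDepthCompletion(nn.Module):
    """Minimal illustrative sketch: RGB (3 channels) + raw depth (1 channel) -> refined depth.

    This is not the architecture proposed in the paper; it only shows the
    interface of an end-to-end depth completion network.
    """

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb: torch.Tensor, raw_depth: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W); raw_depth: (B, 1, H, W) with unreliable values on transparent regions
        return self.net(torch.cat([rgb, raw_depth], dim=1))
```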
In this study, we propose a contrastive-learning-based feature extraction framework with adaptive positive and negative samples (CL-FEFA), which is suitable for unsupervised, supervised, and semi-supervised single-view feature extraction. CL-FEFA adaptively constructs positive and negative samples from the feature extraction results, which makes them more appropriate and accurate. Discriminative features are then re-extracted according to these positive and negative samples using the InfoNCE loss, which makes intra-class samples more compact and inter-class samples more dispersed. Meanwhile, dynamically constructing positive and negative samples from the latent structure information of subspace samples makes our framework more robust to noisy data. In addition, CL-FEFA considers the mutual information between positive samples, i.e., similar samples in the latent structure, which provides theoretical support for its advantages in feature extraction. Finally, numerical experiments demonstrate that the framework has strong advantages over traditional feature extraction methods and contrastive learning methods.
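For reference, a standard form of the InfoNCE loss mentioned above can be sketched as follows; this is the generic contrastive objective, not CL-FEFA's exact adaptive variant:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE: pull the anchor toward its positive and push it away
    from negatives. anchor/positive: (B, D); negatives: (B, K, D).
    CL-FEFA constructs positives/negatives adaptively; this sketch only shows
    the underlying contrastive objective."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logit = (anchor * positive).sum(-1, keepdim=True)        # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", anchor, negatives)   # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long)       # positive sits at index 0
    return F.cross_entropy(logits, labels)
```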
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, which hampers graph-based account detection research. To address these issues, we propose MGTAB, a Multi-Relational Graph-Based Twitter Account Detection Benchmark and the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest raw data collection in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extract the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we perform a thorough evaluation of MGTAB and other public datasets. Our experiments show that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and suggest potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
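As a rough illustration of the feature-selection step described above (keeping the property features with the greatest information gain), one could, for example, rank candidate features by their mutual information with the account labels; the snippet below is a generic scikit-learn-based sketch, not MGTAB's actual pipeline:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def top_k_by_information_gain(features: np.ndarray, labels: np.ndarray, k: int = 20):
    """Rank candidate user-property features by mutual information with the
    account labels and keep the k most informative ones.
    Generic sketch only; MGTAB's exact procedure may differ."""
    scores = mutual_info_classif(features, labels, random_state=0)
    top_idx = np.argsort(scores)[::-1][:k]
    return top_idx, scores[top_idx]
```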